A penalized likelihood based pattern classification algorithm
Authors
Abstract
Similar resources
A penalized likelihood based pattern classification algorithm
Penalized likelihood is a general approach in which an objective function is defined as the log likelihood of the data minus a term penalizing non-smooth solutions. Maximizing this objective yields a solution that trades off the faithfulness of the fit against its smoothness. Most work on this topic has focused on the regression pro...
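The objective described above, log likelihood minus a roughness penalty, can be sketched for a simple classifier. Below is a minimal illustration of the general idea, not the paper's algorithm: logistic regression with an L2 penalty, fitted by gradient ascent. All names and the choice of penalty are illustrative assumptions.

```python
import numpy as np

def penalized_log_likelihood(w, X, y, lam):
    """Bernoulli log likelihood of a logistic model minus an L2 penalty.

    A generic sketch of the penalized-likelihood objective; `lam`
    controls the trade-off between fit and smoothness.
    """
    z = X @ w
    return np.sum(y * z - np.log1p(np.exp(z))) - lam * np.sum(w ** 2)

def fit(X, y, lam=0.1, lr=1.0, steps=1000):
    """Maximize the penalized objective by plain gradient ascent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))           # model probabilities
        grad = (X.T @ (y - p) - 2.0 * lam * w) / len(y)
        w += lr * grad
    return w
```

Larger `lam` shrinks the weights toward zero, giving a smoother (less data-faithful) decision boundary.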
Pattern Classification Using a Penalized Likelihood Method
Penalized likelihood is a well-known, theoretically justified approach that has recently attracted attention from the machine learning community. Its objective function consists of the log likelihood of the data minus a term penalizing non-smooth solutions. Maximizing this objective function leads to a trade-off between the faithfulness an...
A fast image reconstruction algorithm based on penalized-likelihood estimate.
Statistical iterative methods for image reconstruction like maximum likelihood expectation maximization (ML-EM) are more robust and flexible than analytical inversion methods and allow for accurately modeling the counting statistics and the photon transport during acquisition. They are rapidly becoming the standard for image reconstruction in emission computed tomography. The maximum likelihood...
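The ML-EM method mentioned above has a compact multiplicative update, x ← x / (Aᵀ1) · Aᵀ(y / (Ax)). A minimal sketch of that textbook iteration, assuming a nonnegative system matrix `A` (detector bins × voxels) and a vector of measured counts `y`; the names and setup are illustrative, not the paper's implementation:

```python
import numpy as np

def mlem(A, y, n_iters=2000):
    """Textbook ML-EM iteration for emission tomography (a sketch).

    A : nonnegative system matrix, shape (n_detectors, n_voxels)
    y : measured counts, shape (n_detectors,)
    """
    x = np.ones(A.shape[1])           # uniform positive starting image
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image, A^T 1
    for _ in range(n_iters):
        proj = A @ x                  # forward projection A x
        ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
        x = x / sens * (A.T @ ratio)  # multiplicative ML-EM update
    return x
```

The update keeps the image nonnegative by construction and monotonically increases the Poisson log likelihood, which is why it is a standard baseline in emission tomography.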
Maximum likelihood, profile likelihood, and penalized likelihood: a primer.
The method of maximum likelihood is widely used in epidemiology, yet many epidemiologists receive little or no education in the conceptual underpinnings of the approach. Here we provide a primer on maximum likelihood and some important extensions which have proven useful in epidemiologic research, and which reveal connections between maximum likelihood and Bayesian methods. For a given data set...
Penalized Least Squares and Penalized Likelihood
where p_λ(·) is the penalty function. Best subset selection corresponds to p_λ(t) = (λ/2)·I(t ≠ 0). If we take p_λ(t) = λ|t|, then (1.2) becomes the Lasso problem (1.1). Setting p_λ(t) = at² + (1 − a)|t| with 0 ≤ a ≤ 1 results in the method of the elastic net. With p_λ(t) = |t|^q for some 0 < q ≤ 2, it is called bridge regression, which includes ridge regression as a special case when q = 2. Some penal...
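The penalty functions listed in this snippet can be collected into one small helper. This is a sketch for illustration only (the function name and signature are my own), using λ, a, and q as in the text:

```python
import numpy as np

def penalty(t, kind, lam=1.0, a=0.5, q=1.0):
    """Penalty functions p_lambda(t) from the snippet above (a sketch).

    'subset' : (lam/2) * 1{t != 0}          best subset selection
    'lasso'  : lam * |t|                    the Lasso
    'enet'   : lam * (a*t^2 + (1-a)*|t|)    elastic net, 0 <= a <= 1
    'bridge' : lam * |t|^q                  0 < q <= 2; q = 2 is ridge
    """
    t = np.asarray(t, dtype=float)
    if kind == "subset":
        return (lam / 2) * (t != 0)
    if kind == "lasso":
        return lam * np.abs(t)
    if kind == "enet":
        return lam * (a * t ** 2 + (1 - a) * np.abs(t))
    if kind == "bridge":
        return lam * np.abs(t) ** q
    raise ValueError(f"unknown penalty kind: {kind}")
```

Setting a = 1 in 'enet' recovers the ridge penalty, and a = 0 recovers the Lasso, matching the special cases in the text.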
Journal
Journal title: Pattern Recognition
Year: 2009
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2009.04.016